dir.to.work = "C:\\Users\\Yoav\\Documents\\gdrive.mail.tau.ac.il\\Other computers\\My Laptop\\"
source(paste0(dir.to.work, "work\\resources\\stasExamples\\R\\yba.funcs.R"))
# The data's folder (directory)
dir.to.data = "C:\\Users\\Yoav\\OneDrive - Tel-Aviv University\\Documents\\bigfiles\\"
dir.out <- paste0(dir.to.data, "ageformva\\processed")
In previous research, participants who read behavioral evaluative information about a young man and an old man showed a strong pro-young bias in their automatic evaluation of those targets.
In this image, we see the preference for the positively-portrayed
target over the negatively-portrayed target, in the self-report evaluation
(on the left) and in the IAT (on the right). The x-axis is the
behavioral information condition. For example, when x is 10, it means
that the positively-portrayed target was characterized by 13 positive
and 5 negative behaviors, whereas the negatively-portrayed target was
characterized by 13 negative and 5 positive behaviors.
In the present experiment, we will
describe the positively-portrayed target with 8 positive and 4 negative
behaviors, and the opposite for the negatively-portrayed target. That's
a difference of about 8 behaviors.
The purpose of the present experiment was to investigate possible causes for the group effect. We will try to estimate:
Overview of the procedure:
Baseline. Part 1: Only names; Part 3: Only names. (age groups are provided only in the name-photo matching task).
Age1. Part 1: Name + age; Part 3: Only names.
Photo1. Part 1: Name + age + photos; Part 3: Only names.
Photo2. Part 1: Name + age; Part 3: Names+Photos.
PhotoEnd. Part 1: Only names; Part 3: Names+Photos.
Behavioral information manipulation: We manipulated between participants whether the old man or the young man was the positively-portrayed target (there was always one positively-portrayed and one negatively-portrayed target).
Measures:
allds <- read.csv(paste(dir.out, "allds.csv", sep = "\\"))
allok <- read.csv(paste(dir.out, "allok.csv", sep = "\\"))
allok$positive <- ifelse(allok$oldVlnc == "g", "old", ifelse(allok$oldVlnc == "b", "yng", NA))
allok$iat.order2 <- ifelse(allok$yngGd.first & !allok$oldGd.first, "yg.first", ifelse(!allok$yngGd.first & allok$oldGd.first, "og.first", ifelse(allok$yngGd.first & allok$oldGd.first, NA, "error")))
allok$iat.order1 <- ifelse(allok$iat.order2 == "yg.first", ifelse(allok$positive == "yng", "gFirst", ifelse(allok$positive == "old", "bFirst", NA)), ifelse(allok$iat.order2 == "og.first", ifelse(allok$positive == "old", "gFirst", ifelse(allok$positive == "yng", "bFirst", NA)), NA))
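The nested ifelse above is hard to audit. An equivalent base-R formulation via a lookup table (a sketch for reference, not a replacement of the code above) makes the four valid combinations explicit:

```r
# iat.order1 encodes whether the positively-portrayed target was paired with
# Good words first (gFirst) or second (bFirst) in the IAT.
derive_order1 <- function(iat.order2, positive) {
  lookup <- c(yg.first.yng = "gFirst", yg.first.old = "bFirst",
              og.first.old = "gFirst", og.first.yng = "bFirst")
  unname(lookup[paste(iat.order2, positive, sep = ".")])
}
```

Invalid combinations fall out as NA, matching the behavior of the ifelse version.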
allok$condition <- factor(paste(allok$part1Cond, allok$part3Cond, sep = ","))
# levels(allok$condition)
allok$cond <- mapvalues(allok$condition, from = c("gndrCategory", "evalCategory"), to = c("Gender", "Valence"))
How many reached the final task (the memory test), out of those who started the study?
allds$started <- !is.na(allds$task4)
allds$completed <- !is.na(allds$task10)
my.freq(allds$completed[which(allds$started)])
Of the 1431 participants who started the study, 1093 (76%) completed.
The planned exclusions:
We will exclude participants with poor performance in the IAT (above 10% fast trials, Greenwald et al., 2003).
We will exclude participants with less than 70% accuracy rate in the last 20 trials of the matching task.
Participants who did not respond to all the questions in a specific questionnaire will be excluded from all the analyses that include that questionnaire, but not from other analyses.
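A minimal base-R sketch of how the two trial-level criteria could be computed, assuming hypothetical per-participant vectors of IAT latencies (ms) and matching-task correctness (the actual implementation lives in yba.funcs.R and may differ):

```r
# Greenwald et al. (2003): flag participants with more than 10% of IAT
# trials faster than 300 ms.
flag_fast_iat <- function(latency, cutoff_ms = 300, max_prop = 0.10) {
  mean(latency < cutoff_ms) > max_prop
}

# Matching task: flag participants below 70% accuracy in the last 20 trials.
flag_poor_match <- function(correct, n_last = 20, min_acc = 0.70) {
  mean(tail(correct, n_last)) < min_acc
}
```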
IAT performance (0 means ok)
my.freq(allds$SUBEXCL)
38 did not perform the IAT well, but probably some of them did not complete the study, dropping out during the IAT (those with a value of 2). So, let’s see the IAT performance status, only among participants who reached the final task:
my.freq(allds$SUBEXCL[which(allds$completed)])
So, we can say that close to 2% of the participants (20) performed poorly. This is a bit higher than usual (which is about 1%).
Accuracy in the matching task:
mysum(allds$match.acc, probs = c(0.05, 0.036))
n M SD SE med 5% 3.6%
1162.000 0.936 0.102 0.002 0.958 0.750 0.667
People were generally accurate, with less than 4% performing too poorly.
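The 3.6% quantile above was chosen to bracket the 70% criterion; the same question can be asked directly as the empirical CDF at the cutoff. A base-R sketch, assuming an accuracy vector like allds$match.acc:

```r
# Proportion of participants below the accuracy criterion: the empirical
# CDF of the accuracy distribution evaluated at the cutoff.
prop_below <- function(acc, cutoff = 0.7) mean(acc < cutoff, na.rm = TRUE)
```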
We also excluded participants with data that suggested a technical mistake (the conditions were not saved correctly).
Overall, here are the number of exclusions and inclusions:
pts <- list()
pts$n.started <- sum(allds$started)
pts$n.completed <- sum(allds$completed, na.rm = T)
pts$IAT.poor <- sum(allds$SUBEXCL[allds$completed] > 0, na.rm = T)
pts$not.all.evaluations <- sum(allds$n.atts != 12, na.rm = T)
pts$tech.prob <- sum(allds$conds.dup[allds$completed] | !allds$cond.ok[allds$completed], na.rm = T)
pts$match.too.many.errors <- sum(allds$match.acc[allds$completed] < 0.7, na.rm = T) # < 70% accuracy criterion (the original line duplicated the SUBEXCL count)
pts$total.ok <- nrow(allok)
pts
$n.started
[1] 1431
$n.completed
[1] 1093
$IAT.poor
[1] 20
$not.all.evaluations
[1] 39
$tech.prob
[1] 89
$match.too.many.errors
[1] 20
$total.ok
[1] 925
We planned to collect data from 1000 participants, so the sample is 7.5% smaller than we hoped for.
Age:
mysum(allok$age)
n M SD SE med
925.000 35.806 15.295 0.503 31.000
A plot of the age percentiles:
plot.percentile(allok$age, yTitle = "Age", xTitle = "Percentile")
The participant who reported being 114 years old probably lied.
Gender:
my.freq(allok$gender)
Language. We asked: Which of the following is the most accurate? Answers: ['English is my native language', 'English is not my native language, but it is my primary language', 'English is not my native language, and not my primary language, but I am fluent in English', 'I understand some English'], coded as 1, 2, 3, and 4 (-999 means the participant selected a "decline to answer" button at the bottom of the page).
my.freq(allok$lang)
We did not plan to exclude based on this question, but we could explore in the Extended Results section whether the results change based on it.
The IAT preference for the positively-portrayed target over the negatively-portrayed target, as a function of the group affiliation of the positively-portrayed target, and the induction and measurement conditions.
Violin plot:
my.violin(DV = allok$IAT.gb, xFactor = allok$condition, fillFactor = allok$positive) + theme(axis.text.x = element_text(size = 11))
(The rectangle is the median, the circle is the mean, and the error bars
show the SE.) The difference between the two groups (old vs. yng)
indicates age bias. It seems clear only in the name+age,name+photo
condition.
Descriptive Statistics:
mysumBy(IAT.gb ~ part1Cond + part3Cond + positive, dt = allok)
Including the IAT’s block order condition (whether the positively portrayed was paired first with Good words or Bad words in the IAT):
knitr::kable(mysumBy(IAT.gb ~ iat.order1 + part1Cond + part3Cond + positive, dt = allok))
| var | iat.order1 | part1Cond | part3Cond | positive | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|---|
| IAT.gb | bFirst | name-only | name-only | old | 50 | 0.216 | 0.440 | 0.062 | 0.196 |
| IAT.gb | bFirst | name-only | name-only | yng | 48 | 0.186 | 0.355 | 0.051 | 0.190 |
| IAT.gb | bFirst | name-only | name+photo | old | 47 | 0.181 | 0.349 | 0.051 | 0.215 |
| IAT.gb | bFirst | name-only | name+photo | yng | 45 | 0.183 | 0.325 | 0.048 | 0.144 |
| IAT.gb | bFirst | name+age | name-only | old | 52 | 0.038 | 0.362 | 0.050 | 0.004 |
| IAT.gb | bFirst | name+age | name-only | yng | 42 | 0.171 | 0.321 | 0.050 | 0.139 |
| IAT.gb | bFirst | name+age | name+photo | old | 47 | 0.042 | 0.298 | 0.044 | 0.083 |
| IAT.gb | bFirst | name+age | name+photo | yng | 46 | 0.308 | 0.348 | 0.051 | 0.321 |
| IAT.gb | bFirst | name+age+photo | name-only | old | 48 | 0.274 | 0.356 | 0.051 | 0.208 |
| IAT.gb | bFirst | name+age+photo | name-only | yng | 48 | 0.176 | 0.391 | 0.056 | 0.163 |
| IAT.gb | gFirst | name-only | name-only | old | 39 | -0.062 | 0.392 | 0.063 | -0.080 |
| IAT.gb | gFirst | name-only | name-only | yng | 43 | 0.043 | 0.427 | 0.065 | 0.029 |
| IAT.gb | gFirst | name-only | name+photo | old | 40 | -0.109 | 0.320 | 0.051 | -0.127 |
| IAT.gb | gFirst | name-only | name+photo | yng | 37 | 0.092 | 0.357 | 0.059 | 0.103 |
| IAT.gb | gFirst | name+age | name-only | old | 56 | -0.126 | 0.351 | 0.047 | -0.083 |
| IAT.gb | gFirst | name+age | name-only | yng | 48 | -0.056 | 0.350 | 0.051 | -0.067 |
| IAT.gb | gFirst | name+age | name+photo | old | 36 | -0.089 | 0.384 | 0.064 | -0.105 |
| IAT.gb | gFirst | name+age | name+photo | yng | 53 | -0.014 | 0.396 | 0.054 | 0.012 |
| IAT.gb | gFirst | name+age+photo | name-only | old | 51 | 0.028 | 0.458 | 0.064 | -0.095 |
| IAT.gb | gFirst | name+age+photo | name-only | yng | 49 | 0.049 | 0.416 | 0.059 | 0.038 |
There was a clear "reverse" block-order effect. That occurs when the expected score (preference for good) is stronger when the compatible pairing (in our case, good person + good words) comes at the end, rather than at the beginning, of the IAT. The hypothesized reason for such an effect is that the practice of the target categories' side switch (in Block 5) is too long.
When the negatively-portrayed target individual was paired with Good words in Blocks 3 & 4 of the IAT (i.e., first), there was a pro-young bias only in the conditions [name+age, name-only] & (especially in) [name+age, name+photo]. When the positively-portrayed target individual was paired with Bad words in Blocks 3 & 4 of the IAT (i.e., first), the larger pro-young bias was in the condition [name-only, name+photo]. This inconsistency is not very reassuring.
Let’s see it by the complete condition:
knitr::kable(mysumBy(IAT.gb ~ iat.order1 + positive + condition, dt = allok))
| var | iat.order1 | positive | condition | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|
| IAT.gb | bFirst | old | name-only,name-only | 50 | 0.216 | 0.440 | 0.062 | 0.196 |
| IAT.gb | bFirst | old | name-only,name+photo | 47 | 0.181 | 0.349 | 0.051 | 0.215 |
| IAT.gb | bFirst | old | name+age,name-only | 52 | 0.038 | 0.362 | 0.050 | 0.004 |
| IAT.gb | bFirst | old | name+age,name+photo | 47 | 0.042 | 0.298 | 0.044 | 0.083 |
| IAT.gb | bFirst | old | name+age+photo,name-only | 48 | 0.274 | 0.356 | 0.051 | 0.208 |
| IAT.gb | bFirst | yng | name-only,name-only | 48 | 0.186 | 0.355 | 0.051 | 0.190 |
| IAT.gb | bFirst | yng | name-only,name+photo | 45 | 0.183 | 0.325 | 0.048 | 0.144 |
| IAT.gb | bFirst | yng | name+age,name-only | 42 | 0.171 | 0.321 | 0.050 | 0.139 |
| IAT.gb | bFirst | yng | name+age,name+photo | 46 | 0.308 | 0.348 | 0.051 | 0.321 |
| IAT.gb | bFirst | yng | name+age+photo,name-only | 48 | 0.176 | 0.391 | 0.056 | 0.163 |
| IAT.gb | gFirst | old | name-only,name-only | 39 | -0.062 | 0.392 | 0.063 | -0.080 |
| IAT.gb | gFirst | old | name-only,name+photo | 40 | -0.109 | 0.320 | 0.051 | -0.127 |
| IAT.gb | gFirst | old | name+age,name-only | 56 | -0.126 | 0.351 | 0.047 | -0.083 |
| IAT.gb | gFirst | old | name+age,name+photo | 36 | -0.089 | 0.384 | 0.064 | -0.105 |
| IAT.gb | gFirst | old | name+age+photo,name-only | 51 | 0.028 | 0.458 | 0.064 | -0.095 |
| IAT.gb | gFirst | yng | name-only,name-only | 43 | 0.043 | 0.427 | 0.065 | 0.029 |
| IAT.gb | gFirst | yng | name-only,name+photo | 37 | 0.092 | 0.357 | 0.059 | 0.103 |
| IAT.gb | gFirst | yng | name+age,name-only | 48 | -0.056 | 0.350 | 0.051 | -0.067 |
| IAT.gb | gFirst | yng | name+age,name+photo | 53 | -0.014 | 0.396 | 0.054 | 0.012 |
| IAT.gb | gFirst | yng | name+age+photo,name-only | 49 | 0.049 | 0.416 | 0.059 | 0.038 |
Let’s try to see which conditions show a pro-young bias, by looking at the pro-positive IAT score (preference for the positively-portrayed), by IAT-order condition and by positive-target’s group condition.
knitr::kable(mysumBy(IAT.gb ~ condition + iat.order1 + positive, dt = allok))
| var | condition | iat.order1 | positive | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|
| IAT.gb | name-only,name-only | bFirst | old | 50 | 0.216 | 0.440 | 0.062 | 0.196 |
| IAT.gb | name-only,name-only | bFirst | yng | 48 | 0.186 | 0.355 | 0.051 | 0.190 |
| IAT.gb | name-only,name-only | gFirst | old | 39 | -0.062 | 0.392 | 0.063 | -0.080 |
| IAT.gb | name-only,name-only | gFirst | yng | 43 | 0.043 | 0.427 | 0.065 | 0.029 |
| IAT.gb | name-only,name+photo | bFirst | old | 47 | 0.181 | 0.349 | 0.051 | 0.215 |
| IAT.gb | name-only,name+photo | bFirst | yng | 45 | 0.183 | 0.325 | 0.048 | 0.144 |
| IAT.gb | name-only,name+photo | gFirst | old | 40 | -0.109 | 0.320 | 0.051 | -0.127 |
| IAT.gb | name-only,name+photo | gFirst | yng | 37 | 0.092 | 0.357 | 0.059 | 0.103 |
| IAT.gb | name+age,name-only | bFirst | old | 52 | 0.038 | 0.362 | 0.050 | 0.004 |
| IAT.gb | name+age,name-only | bFirst | yng | 42 | 0.171 | 0.321 | 0.050 | 0.139 |
| IAT.gb | name+age,name-only | gFirst | old | 56 | -0.126 | 0.351 | 0.047 | -0.083 |
| IAT.gb | name+age,name-only | gFirst | yng | 48 | -0.056 | 0.350 | 0.051 | -0.067 |
| IAT.gb | name+age,name+photo | bFirst | old | 47 | 0.042 | 0.298 | 0.044 | 0.083 |
| IAT.gb | name+age,name+photo | bFirst | yng | 46 | 0.308 | 0.348 | 0.051 | 0.321 |
| IAT.gb | name+age,name+photo | gFirst | old | 36 | -0.089 | 0.384 | 0.064 | -0.105 |
| IAT.gb | name+age,name+photo | gFirst | yng | 53 | -0.014 | 0.396 | 0.054 | 0.012 |
| IAT.gb | name+age+photo,name-only | bFirst | old | 48 | 0.274 | 0.356 | 0.051 | 0.208 |
| IAT.gb | name+age+photo,name-only | bFirst | yng | 48 | 0.176 | 0.391 | 0.056 | 0.163 |
| IAT.gb | name+age+photo,name-only | gFirst | old | 51 | 0.028 | 0.458 | 0.064 | -0.095 |
| IAT.gb | name+age+photo,name-only | gFirst | yng | 49 | 0.049 | 0.416 | 0.059 | 0.038 |
A complete ANOVA:
library(afex)
aov_ez(id = "session_id", dv = "IAT.gb", data = allok, between = c("iat.order1", "condition", "positive"), return = "nice", anova_table = list(es = "pes"))
Only a small effect of bias:
mysumBy(IAT.gb ~ positive, dt = allok)
This effect looks smaller than in previous research (e.g., a beta of .69 in the original study). Notably, the previous research was a [photo-only, photo-only] condition, and without the matching task. Any of these modifications might have reduced the effect of the target’s group affiliation.
The effect of age, within each condition:
aez <- aov_ez(id = "session_id", dv = "IAT.gb", data = allok, between = c("iat.order1", "condition", "positive"))
library(emmeans) # not attached automatically by afex
emm_s.t <- emmeans(aez, pairwise ~ positive | condition)
emm_s.t[[2]]
condition = name-only,name-only:
contrast estimate SE df t.ratio p.value
old - yng -0.0372 0.0559 905 -0.666 0.5056
condition = name-only,name+photo:
contrast estimate SE df t.ratio p.value
old - yng -0.1015 0.0577 905 -1.760 0.0787
condition = name+age,name-only:
contrast estimate SE df t.ratio p.value
old - yng -0.1017 0.0533 905 -1.907 0.0569
condition = name+age,name+photo:
contrast estimate SE df t.ratio p.value
old - yng -0.1708 0.0559 905 -3.056 0.0023
condition = name+age+photo,name-only:
contrast estimate SE df t.ratio p.value
old - yng 0.0385 0.0533 905 0.721 0.4711
Results are averaged over the levels of: iat.order1
The effect was significant only in the name+age,name+photo condition. This might be the most similar to the original method (photo-only, photo-only). It might suggest that it is important to know the group affiliation in the encoding. However, we can’t easily say that there would not be an effect without that, because the [name-only,name+photo] condition might show an effect with a larger sample. Let’s have a look at effects (the estimates) and our main questions. Here are our research questions regarding the IAT:
Is there an age bias in the baseline condition? Definitely not. It is not enough to learn about the age after the encoding, and without showing the age in the IAT.
A comparison of the group effect between Photo1 and Age1 will estimate the effect of visual cues on encoding. name+age+photo,name-only (Photo1) vs. name+age,name-only [Age1] = Well, Photo1 is the only condition that did not show the bias numerically (a positive estimate), so we do not see evidence here that showing photos during encoding was important. If anything, we see evidence of the opposite, which would not be easily explained.
A comparison of Photo2 to Age1 will estimate the effect of visual cues on evaluation. name+age,name+photo [Photo2] vs. name+age,name-only [Age1]: The effect was numerically larger in Photo2 than Age1, but probably not significantly so.
A comparison of Age1 to baseline will estimate the effect of mere group knowledge on encoding. name+age,name-only [Age1] vs. name-only,name-only [baseline] = Numerically, it seems that knowing about the age during encoding increased bias, but it is obvious that this difference is not significant.
I haven't succeeded so far in testing the planned contrasts within the complete ANOVA model. Therefore, for now, we'll test them with ANOVAs on each pair of conditions (as planned in the plan document). This is a less powerful test (the MSE is likely to be larger), and it doesn't allow for easy correction of the p-values.
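For reference, here is a sketch of how the planned contrasts could be tested inside the complete model, as contrasts of the within-condition (old - yng) effects in emmeans. The coefficient vectors assume the condition level order printed above (baseline, name-only/name+photo, Age1, Photo2, Photo1); check them against the printed order of grp before trusting the labels:

```r
library(emmeans)
# aez is the full afex model fitted above
emm <- emmeans(aez, ~ positive | condition)
grp <- contrast(emm, "pairwise")  # old - yng within each of the 5 conditions
# Planned contrasts on those group effects (contrasts of contrasts):
contrast(grp, method = list(
  Photo2.vs.Age1   = c(0, 0, -1, 1, 0),
  Age1.vs.Baseline = c(-1, 0, 1, 0, 0),
  Photo1.vs.Age1   = c(0, 0, -1, 0, 1)
), by = NULL, adjust = "holm")
```

This keeps the full-model error term and allows a multiplicity adjustment across the three planned tests.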
A comparison of Photo2 to Age1 will estimate the effect of visual cues on evaluation. name+age,name+photo [Photo2] vs. name+age,name-only [Age1]
ccc <- allok[which(allok$condition %in% c("name+age,name+photo", "name+age,name-only")), ]
mysumBy(IAT.gb ~ condition + positive, dt = ccc)
aov_ez(id = "session_id", dv = "IAT.gb", data = ccc, between = c("iat.order1", "condition", "positive"), return = "nice", anova_table = list(es = "pes"))
No interaction between condition and the group affiliation of the positively-portrayed target individual on the IAT preference for the positively-portrayed individual over the negatively-portrayed individual. With such a small effect, it is unlikely that we will see a significant effect with any alternative analysis.
A comparison of Age1 to baseline will estimate the effect of mere group knowledge on encoding. name+age,name-only [Age1] vs. name-only,name-only [baseline]
ccc <- allok[which(allok$condition %in% c("name-only,name-only", "name+age,name-only")), ]
mysumBy(IAT.gb ~ condition + positive, dt = ccc)
aov_ez(id = "session_id", dv = "IAT.gb", data = ccc, between = c("iat.order1", "condition", "positive"), return = "nice", anova_table = list(es = "pes"))
No interaction between condition and the group affiliation of the positively-portrayed target individual on the IAT preference for the positively-portrayed individual over the negatively-portrayed. Again, the partial eta squared seems too small to amount to a significant difference in any alternative statistical analysis.
A comparison of the group effect between Photo1 and Age1 will estimate the effect of visual cues on encoding. name+age+photo,name-only (Photo1) vs. name+age,name-only [Age1]
ccc <- allok[which(allok$condition %in% c("name+age+photo,name-only", "name+age,name-only")), ]
mysumBy(IAT.gb ~ condition + positive, dt = ccc)
aov_ez(id = "session_id", dv = "IAT.gb", data = ccc, between = c("iat.order1", "condition", "positive"), return = "nice", anova_table = list(es = "pes"))
No interaction between condition and the group affiliation of the positively-portrayed target individual on the IAT preference for the positively-portrayed individual over the negatively-portrayed. This time, perhaps a more powerful statistical analysis would result in a significant difference, although the effect is still very small. Importantly, the difference is odd (stronger bias with the photos of the target individuals), which further reduces our confidence in these results.
Regarding each of the two target individuals, participants responded to 6 questions in the following format: Does the word ATTRIBUTE characterize NAME? In name+photo conditions, the photo of the individual was shown with each question. I will explore these questions further in the "Extended Results" section, but now let's dive into the preference score computed from these evaluation questions. We will look at the preference for the positively-portrayed target individual.
my.violin(DV = allok$eval.gb, xFactor = allok$condition, fillFactor = allok$positive) + theme(axis.text.x = element_text(size = 11))
mysumBy(eval.gb ~ part1Cond + part3Cond + positive, dt = allok)
There was always a preference for the positively-portrayed target. In two of the conditions, there seems to be a pro-young bias: in the baseline condition [name-only, name-only] and in the Photo1 [name+age+photo, name-only] conditions. That is unexpected.
Let's run a 5 (condition) x 2 (positive target) ANOVA:
aov_ez(id = "session_id", dv = "eval.gb", data = allok, between = c("condition", "positive"), return = "nice", anova_table = list(es = "pes"))
Well, there is an interaction, but it is mighty small. Let’s see the effect of the targets’ group within each condition:
aez <- aov_ez(id = "session_id", dv = "eval.gb", data = allok, between = c("condition", "positive"))
emm_s.t <- emmeans(aez, pairwise ~ positive | condition)
emm_s.t[[2]]
condition = name-only,name-only:
contrast estimate SE df t.ratio p.value
old - yng -16.626 7.01 915 -2.372 0.0179
condition = name-only,name+photo:
contrast estimate SE df t.ratio p.value
old - yng 2.606 7.24 915 0.360 0.7188
condition = name+age,name-only:
contrast estimate SE df t.ratio p.value
old - yng -3.336 6.71 915 -0.497 0.6192
condition = name+age,name+photo:
contrast estimate SE df t.ratio p.value
old - yng -0.819 7.00 915 -0.117 0.9069
condition = name+age+photo,name-only:
contrast estimate SE df t.ratio p.value
old - yng -20.352 6.72 915 -3.031 0.0025
Let's also see those effects in a t-test, mostly to see Cohen's d and the Bayes factor. In name+age+photo,name-only:
ccc <- allok[which(allok$condition == "name+age+photo,name-only"), ]
ttestIS(data = ccc, vars = c("eval.gb"), group = "positive", bf = T, desc = T, effectSize = T)
INDEPENDENT SAMPLES T-TEST
Independent Samples T-Test
───────────────────────────────────────────────────────────────────────────────────────────────────────────────
Statistic error % df p Effect Size
───────────────────────────────────────────────────────────────────────────────────────────────────────────────
eval.gb Student's t -3.431114 194.0000 0.0007345 Cohen's d -0.4901846
Bayes factor₁₀ 34.17799 7.992516e-10
───────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note. Hₐ: μ_old ≠ μ_yng
Group Descriptives
──────────────────────────────────────────────────────────────────────────
Group N Mean Median SD SE
──────────────────────────────────────────────────────────────────────────
eval.gb old 99 21.74074 10.66667 37.50179 3.769072
yng 97 42.09278 33.33333 45.25380 4.594828
──────────────────────────────────────────────────────────────────────────
In name-only,name-only (the baseline condition)
ccc <- allok[which(allok$condition == "name-only,name-only"), ]
ttestIS(data = ccc, vars = c("eval.gb"), group = "positive", bf = T, desc = T, effectSize = T)
INDEPENDENT SAMPLES T-TEST
Independent Samples T-Test
──────────────────────────────────────────────────────────────────────────────────────────────────────────────
Statistic error % df p Effect Size
──────────────────────────────────────────────────────────────────────────────────────────────────────────────
eval.gb Student's t -2.221616 178.0000 0.0275698 Cohen's d -0.3311995
Bayes factor₁₀ 1.575094 1.184092e-4
──────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note. Hₐ: μ_old ≠ μ_yng
Group Descriptives
───────────────────────────────────────────────────────────────────────────
Group N Mean Median SD SE
───────────────────────────────────────────────────────────────────────────
eval.gb old 89 15.02996 0.6666667 51.19278 5.426424
yng 91 31.65568 17.00000 49.20688 5.158283
───────────────────────────────────────────────────────────────────────────
So, two of the conditions showed a small-to-medium pro-young bias in the self-reported evaluation. What's odd is that these conditions did not show that pro-young bias in the IAT. This might suggest that the lack of pro-young IAT bias in the name-only IATs reflects worse measurement rather than less bias. That is, perhaps IATs with names only are less reliable/valid in attitude formation studies (because the name was just learned). We could test that speculation by looking at the effect of condition on internal consistency. We might also learn something by looking at correlations between IAT and self-report and at discrepancies, as a function of the condition.
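The internal-consistency speculation is testable. A base-R sketch of the usual Spearman-Brown-corrected split-half estimate, assuming per-participant half scores exist (e.g., a D score computed separately on odd and even trials; the column names in the commented line are hypothetical):

```r
# Split-half reliability, corrected to full test length with Spearman-Brown.
split_half <- function(odd_half, even_half) {
  r <- cor(odd_half, even_half, use = "complete.obs")
  2 * r / (1 + r)
}
# By condition (hypothetical columns IAT.d.odd / IAT.d.even):
# by(allok, allok$condition, function(x) split_half(x$IAT.d.odd, x$IAT.d.even))
```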
Overall correlation between the evaluation scores. The IAT.prf and eval.diff are preferences for the young target over the old target.
my.htmlTable(cornp(allok[, c("IAT.gb", "eval.gb", "age", "IAT.prf", "eval.diff")]))
(Each cell: r and its p-value; n = 925 throughout.)

| | IAT.gb | eval.gb | age | IAT.prf |
|---|---|---|---|---|
| eval.gb | 0.171 (p < .001) | | | |
| age | -0.013 (p = .698) | -0.036 (p = .273) | | |
| IAT.prf | -0.021 (p = .520) | -0.039 (p = .237) | 0.007 (p = .831) | |
| eval.diff | -0.008 (p = .816) | 0.001 (p = .974) | 0.021 (p = .523) | 0.237 (p < .001) |
It might seem odd that the IAT.prf/eval.diff correlation is not the same as the IAT.gb/eval.gb correlation, but I verified (in the sanity checks below) that there is no error here. My guess is that it can happen if the SDs change due to the recoding. In any case, the correlations here are not large, but they are in the range of the typical IAT/self-report correlation, which is about 0.15-0.70, depending on the attitude object.
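That guess is easy to demonstrate with toy data: the pro-positive to pro-young recoding flips the sign of both measures for half of the sample, which moves between-group mean differences into or out of the covariance, so the two correlations need not match. A minimal simulation (the numbers are arbitrary, not fitted to our data):

```r
set.seed(1)
n <- 1000
flip <- rep(c(1, -1), each = n / 2)          # 1 = positive target is young
x <- rnorm(n) + 0.3 * (flip == 1)            # pro-positive "IAT" score
y <- 0.2 * x + rnorm(n) + 0.5 * (flip == 1)  # pro-positive "self-report"
cor(x, y)                # pro-positive correlation
cor(x * flip, y * flip)  # pro-young correlation after recoding
```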
Let’s see whether the correlation changed drastically between conditions:
my.htmlTable(corby(inData = allok, xs = c("IAT.gb", "IAT.prf"), ys = c("eval.gb", "eval.diff"), by = c("condition")))
(Each cell: r and its p-value.)

| condition | var | eval.gb | eval.diff | n |
|---|---|---|---|---|
| name-only,name-only | IAT.gb | 0.190 (p = .011) | -0.121 (p = .106) | 180 |
| name-only,name-only | IAT.prf | -0.100 (p = .182) | 0.270 (p < .001) | 180 |
| name-only,name+photo | IAT.gb | 0.227 (p = .003) | 0.087 (p = .261) | 169 |
| name-only,name+photo | IAT.prf | 0.024 (p = .753) | 0.316 (p < .001) | 169 |
| name+age,name-only | IAT.gb | 0.004 (p = .957) | -0.027 (p = .708) | 198 |
| name+age,name-only | IAT.prf | -0.109 (p = .125) | 0.001 (p = .989) | 198 |
| name+age,name+photo | IAT.gb | 0.202 (p = .006) | 0.063 (p = .396) | 182 |
| name+age,name+photo | IAT.prf | -0.030 (p = .691) | 0.252 (p < .001) | 182 |
| name+age+photo,name-only | IAT.gb | 0.216 (p = .002) | -0.055 (p = .446) | 196 |
| name+age+photo,name-only | IAT.prf | 0.037 (p = .602) | 0.358 (p < .001) | 196 |
The correlations were relatively consistent, except in the 'name+age,name-only' condition, which showed a zero correlation. I don't see anything out of the ordinary in that condition, in terms of discrepancy.
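Whether that zero correlation really stands out can be checked with a Fisher r-to-z test for independent correlations. A base-R sketch (the example plugs in the IAT.gb/eval.gb correlations from the table above):

```r
# Two-sided test of the difference between two independent correlations
# (Fisher r-to-z).
cor_diff_test <- function(r1, n1, r2, n2) {
  z <- (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  2 * pnorm(-abs(z))
}
# name+age,name-only vs. name+age+photo,name-only:
cor_diff_test(0.004, 198, 0.216, 196)
```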
Let’s see effect sizes of the pro-young bias, by condition and measure.
library(effsize) # cohen.d (if not already attached via yba.funcs.R)
effect_sizes <- by(allok, allok$condition, function(x) {
d_IAT <- cohen.d(x$IAT.gb[x$positive == "yng"], x$IAT.gb[x$positive == "old"])
d_eval <- cohen.d(x$eval.gb[x$positive == "yng"], x$eval.gb[x$positive == "old"])
m_IAT.yng <- mean(x$IAT.gb[x$positive == "yng"], na.rm = T)
m_IAT.old <- mean(x$IAT.gb[x$positive == "old"], na.rm = T)
m_eval.yng <- mean(x$eval.gb[x$positive == "yng"], na.rm = T)
m_eval.old <- mean(x$eval.gb[x$positive == "old"], na.rm = T)
c(IAT.d = d_IAT$estimate, IAT.ci = d_IAT$conf.int, eval.d = d_eval$estimate, eval.ci = d_eval$conf.int, IAT.m.yng = m_IAT.yng, IAT.m.old = m_IAT.old, eval.m.yng = m_eval.yng, eval.m.old = m_eval.old)
})
effect_sizes_df <- as.data.frame(round(do.call(rbind, effect_sizes), 2))
effect_sizes_df$IAT.ci <- paste0("[", effect_sizes_df$IAT.ci.lower, " - ", effect_sizes_df$IAT.ci.upper, "]")
effect_sizes_df$eval.ci <- paste0("[", effect_sizes_df$eval.ci.lower, " - ", effect_sizes_df$eval.ci.upper, "]")
effect_sizes_df$IAT.ci.lower <- NULL
effect_sizes_df$IAT.ci.upper <- NULL
effect_sizes_df$eval.ci.lower <- NULL
effect_sizes_df$eval.ci.upper <- NULL
effect_sizes_df[, c("IAT.d", "IAT.ci", "eval.d", "eval.ci", "IAT.m.yng", "IAT.m.old", "eval.m.yng", "eval.m.old")]
There is an overlap in the CI of the effect-size of the self-report (eval) and the IAT score in all the conditions. The only possible exception is Photo1 [name+age+photo,name-only], with the largest pro-young bias in the self-reported evaluation and the smallest (indeed, pro-old) bias in the IAT. Because this is unexpected and not easily explained, I suspect this is a statistical fluke, until replication.
The memory question was: Who appeared with the following behavior?
<%=questionsData.bv%>
Response options were either the names or the photos of the two targets, and “None of them”.
We presented 4 positive and 4 negative behaviors of each target, and 4 positive and 4 negative novel behaviors (total of 24 behaviors).
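For reading the accuracy tables below, it helps to note the guessing floor: with three response options (young, old, none), chance is 1/3, so a pure guesser gets about 8 of the 24 items right. A base-R sketch of a chance-level criterion (nothing like this was preregistered; it is just context):

```r
# Accuracy expected from guessing, and the accuracy a pure guesser would
# reach at the 95th percentile over the 24 memory items.
chance_acc <- 1 / 3
qbinom(0.95, size = 24, prob = chance_acc) / 24  # cutoff clearly above chance
```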
Memory accuracy by type of question (who = who was really characterized by this behavior [y = young, o = old, n = none]; vlnc = whether the behavior was positive or negative [p or n]):
mmm <- mymelt(dt = allok, formula = macc.o.neg + macc.o.pos + macc.y.neg + macc.y.pos + macc.n.neg + macc.n.pos ~ session_id + condition + positive)
mmm$who <- substr(mmm$variable, 6, 6)
mmm$vlnc <- substr(mmm$variable, 8, 8)
mysumBy(value ~ who + vlnc, dt = mmm)
Negative behaviors, behaviors performed by the young target, and behaviors that appeared in the induction were remembered better.
With the positively-portrayed group affiliation condition:
knitr::kable(mysumBy(value ~ vlnc + positive + who, dt = mmm))
| var | vlnc | positive | who | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|
| value | n | old | n | 451 | 0.729 | 0.356 | 0.016 | 1.00 |
| value | n | old | o | 451 | 0.774 | 0.289 | 0.013 | 1.00 |
| value | n | old | y | 451 | 0.835 | 0.246 | 0.011 | 1.00 |
| value | n | yng | n | 454 | 0.743 | 0.343 | 0.016 | 1.00 |
| value | n | yng | o | 454 | 0.792 | 0.262 | 0.012 | 1.00 |
| value | n | yng | y | 454 | 0.849 | 0.255 | 0.012 | 1.00 |
| value | p | old | n | 451 | 0.684 | 0.358 | 0.017 | 0.75 |
| value | p | old | o | 451 | 0.713 | 0.280 | 0.013 | 0.75 |
| value | p | old | y | 451 | 0.749 | 0.263 | 0.012 | 0.75 |
| value | p | yng | n | 454 | 0.702 | 0.350 | 0.016 | 0.75 |
| value | p | yng | o | 454 | 0.730 | 0.281 | 0.013 | 0.75 |
| value | p | yng | y | 454 | 0.777 | 0.250 | 0.012 | 0.75 |
Regarding negative behaviors:
Regarding positive behaviors:
So, it simply seems that people had better memory for the young than for the old target, and it is not clear whether there was any moderation by congruency between the behavior and the majority of behaviors performed by the target.
Let’s look at the same results with a different order, to see whether the positive target group affiliation conditions had any effect:
knitr::kable(mysumBy(value ~ who + positive + vlnc, dt = mmm))
| var | who | positive | vlnc | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|
| value | n | old | n | 451 | 0.729 | 0.356 | 0.016 | 1.00 |
| value | n | old | p | 451 | 0.684 | 0.358 | 0.017 | 0.75 |
| value | n | yng | n | 454 | 0.743 | 0.343 | 0.016 | 1.00 |
| value | n | yng | p | 454 | 0.702 | 0.350 | 0.016 | 0.75 |
| value | o | old | n | 451 | 0.774 | 0.289 | 0.013 | 1.00 |
| value | o | old | p | 451 | 0.713 | 0.280 | 0.013 | 0.75 |
| value | o | yng | n | 454 | 0.792 | 0.262 | 0.012 | 1.00 |
| value | o | yng | p | 454 | 0.730 | 0.281 | 0.013 | 0.75 |
| value | y | old | n | 451 | 0.835 | 0.246 | 0.011 | 1.00 |
| value | y | old | p | 451 | 0.749 | 0.263 | 0.012 | 0.75 |
| value | y | yng | n | 454 | 0.849 | 0.255 | 0.012 | 1.00 |
| value | y | yng | p | 454 | 0.777 | 0.250 | 0.012 | 0.75 |
Still hard to say.
We could get the conditions in, but it is probably best to see that in a plot. The top heading indicates the group of the positive target. The next heading indicates the condition. The fill is the valence of the behavior. The x-axis is who was characterized by this behavior in the induction.
my.violin(DV = mmm$value, xFactor = mmm$who, fillFactor = mmm$vlnc, facet1 = mmm$positive, facet2 = mmm$condition)
The patterns might be different sometimes, but it’s still hard to tell.
So, perhaps we need an ANOVA to get a better idea about effects.
aaa <- aov_ez(id = "session_id", dv = "value", data = mmm, within = c("who", "vlnc"), between = c("condition", "positive"), anova_table = list(es = "pes"))
nice(aaa)
The main effect of condition:
mysumBy(value ~ condition, dt = mmm)
Better memory when more individuating details were provided about the target in the induction. Providing the photo in the memory test doesn’t seem to improve memory.
Comparing all levels:
m1 <- emmeans(aaa, ~condition)
pairs(m1)
contrast estimate SE df t.ratio p.value
(name-only,name-only) - (name-only,name+photo) 0.0626 0.0203 895 3.091 0.0175
(name-only,name-only) - (name+age,name-only) -0.0695 0.0194 895 -3.579 0.0033
(name-only,name-only) - (name+age,name+photo) -0.0503 0.0198 895 -2.539 0.0831
(name-only,name-only) - (name+age+photo,name-only) -0.1057 0.0194 895 -5.436 <.0001
(name-only,name+photo) - (name+age,name-only) -0.1321 0.0198 895 -6.669 <.0001
(name-only,name+photo) - (name+age,name+photo) -0.1129 0.0202 895 -5.588 <.0001
(name-only,name+photo) - (name+age+photo,name-only) -0.1684 0.0198 895 -8.481 <.0001
(name+age,name-only) - (name+age,name+photo) 0.0191 0.0194 895 0.989 0.8605
(name+age,name-only) - (name+age+photo,name-only) -0.0363 0.0190 895 -1.911 0.3119
(name+age,name+photo) - (name+age+photo,name-only) -0.0554 0.0194 895 -2.856 0.0355
Results are averaged over the levels of: positive, vlnc, who
P value adjustment: tukey method for comparing a family of 5 estimates
Most of them are significant.
Next, we found a main effect of the person who was actually characterized by the behavior:
mysumBy(value ~ who, dt = mmm)
Better for young targets.
Next, a main effect of valence of the behavior:
mysumBy(value ~ vlnc, dt = mmm)
More accurate for negative behaviors.
Next, the induction condition moderated the effect of who was actually characterized by the behavior:
knitr::kable(mysumBy(value ~ condition + who, dt = mmm))
| var | condition | who | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|
| value | name-only,name-only | n | 352 | 0.754 | 0.325 | 0.017 | 1.00 |
| value | name-only,name-only | o | 352 | 0.658 | 0.311 | 0.016 | 0.75 |
| value | name-only,name-only | y | 352 | 0.750 | 0.294 | 0.016 | 0.75 |
| value | name-only,name+photo | n | 326 | 0.613 | 0.398 | 0.022 | 0.75 |
| value | name-only,name+photo | o | 326 | 0.632 | 0.309 | 0.017 | 0.75 |
| value | name-only,name+photo | y | 326 | 0.729 | 0.277 | 0.015 | 0.75 |
| value | name+age,name-only | n | 390 | 0.749 | 0.333 | 0.017 | 1.00 |
| value | name+age,name-only | o | 390 | 0.799 | 0.255 | 0.013 | 1.00 |
| value | name+age,name-only | y | 390 | 0.822 | 0.247 | 0.012 | 1.00 |
| value | name+age,name+photo | n | 358 | 0.690 | 0.365 | 0.019 | 0.75 |
| value | name+age,name+photo | o | 358 | 0.798 | 0.231 | 0.012 | 0.75 |
| value | name+age,name+photo | y | 358 | 0.825 | 0.229 | 0.012 | 1.00 |
| value | name+age+photo,name-only | n | 384 | 0.753 | 0.322 | 0.016 | 1.00 |
| value | name+age+photo,name-only | o | 384 | 0.852 | 0.223 | 0.011 | 1.00 |
| value | name+age+photo,name-only | y | 384 | 0.873 | 0.207 | 0.010 | 1.00 |
No effects at all for the group affiliation of the positively portrayed target. People almost always had better memory for the young target’s behaviors, but memory for the old person’s behavior was not always better than for novel behaviors. Of particular note, in the [name-only, name-only] condition, the memory for the old target’s behavior was the worst, and the memory for the young target’s behaviors was not better than memory for the novel behaviors (of course, that could result from a bias to answer “none”, in that condition).
There were a couple of very small (but significant) effects. One was a three-way interaction between condition, who performed the behavior, and the behavior’s valence.
Let’s plot that interaction within each condition, to see where the difference is:
my.violin(DV = mmm$value, xFactor = mmm$who, fillFactor = mmm$vlnc, facet1 = mmm$condition)
I’m not sure. Perhaps if we plot it a bit differently:
my.violin(DV = mmm$value, xFactor = mmm$vlnc, fillFactor = mmm$who, facet1 = mmm$condition)
The patterns differ, but I don’t see a clear way to describe anything interesting about them.
knitr::kable(mysumBy(value ~ condition + who + vlnc, dt = mmm))
| var | condition | who | vlnc | n | M | SD | SE | med |
|---|---|---|---|---|---|---|---|---|
| value | name-only,name-only | n | n | 176 | 0.780 | 0.313 | 0.023 | 1.00 |
| value | name-only,name-only | n | p | 176 | 0.727 | 0.336 | 0.025 | 0.75 |
| value | name-only,name-only | o | n | 176 | 0.723 | 0.315 | 0.023 | 0.75 |
| value | name-only,name-only | o | p | 176 | 0.594 | 0.294 | 0.022 | 0.75 |
| value | name-only,name-only | y | n | 176 | 0.776 | 0.297 | 0.022 | 1.00 |
| value | name-only,name-only | y | p | 176 | 0.724 | 0.290 | 0.022 | 0.75 |
| value | name-only,name+photo | n | n | 163 | 0.633 | 0.407 | 0.031 | 0.75 |
| value | name-only,name+photo | n | p | 163 | 0.592 | 0.389 | 0.030 | 0.75 |
| value | name-only,name+photo | o | n | 163 | 0.653 | 0.324 | 0.025 | 0.75 |
| value | name-only,name+photo | o | p | 163 | 0.610 | 0.293 | 0.023 | 0.75 |
| value | name-only,name+photo | y | n | 163 | 0.775 | 0.270 | 0.021 | 0.75 |
| value | name-only,name+photo | y | p | 163 | 0.683 | 0.278 | 0.021 | 0.75 |
| value | name+age,name-only | n | n | 195 | 0.781 | 0.322 | 0.023 | 1.00 |
| value | name+age,name-only | n | p | 195 | 0.718 | 0.342 | 0.024 | 0.75 |
| value | name+age,name-only | o | n | 195 | 0.833 | 0.243 | 0.017 | 1.00 |
| value | name+age,name-only | o | p | 195 | 0.764 | 0.263 | 0.019 | 0.75 |
| value | name+age,name-only | y | n | 195 | 0.842 | 0.257 | 0.018 | 1.00 |
| value | name+age,name-only | y | p | 195 | 0.801 | 0.235 | 0.017 | 0.75 |
| value | name+age,name+photo | n | n | 179 | 0.712 | 0.365 | 0.027 | 1.00 |
| value | name+age,name+photo | n | p | 179 | 0.668 | 0.365 | 0.027 | 0.75 |
| value | name+age,name+photo | o | n | 179 | 0.813 | 0.228 | 0.017 | 0.75 |
| value | name+age,name+photo | o | p | 179 | 0.784 | 0.233 | 0.017 | 0.75 |
| value | name+age,name+photo | y | n | 179 | 0.885 | 0.200 | 0.015 | 1.00 |
| value | name+age,name+photo | y | p | 179 | 0.764 | 0.240 | 0.018 | 0.75 |
| value | name+age+photo,name-only | n | n | 192 | 0.760 | 0.323 | 0.023 | 1.00 |
| value | name+age+photo,name-only | n | p | 192 | 0.746 | 0.322 | 0.023 | 0.75 |
| value | name+age+photo,name-only | o | n | 192 | 0.871 | 0.206 | 0.015 | 1.00 |
| value | name+age+photo,name-only | o | p | 192 | 0.832 | 0.237 | 0.017 | 1.00 |
| value | name+age+photo,name-only | y | n | 192 | 0.918 | 0.187 | 0.013 | 1.00 |
| value | name+age+photo,name-only | y | p | 192 | 0.828 | 0.217 | 0.015 | 1.00 |
Well, who knows. We could look at the who × valence interaction within each of the five conditions.
unique(mmm$condition)
[1] name+age,name+photo name+age,name-only name+age+photo,name-only name-only,name+photo name-only,name-only
Levels: name-only,name-only name-only,name+photo name+age,name-only name+age,name+photo name+age+photo,name-only
The confidence question was: ‘How certain are you about this answer?’; response options: ‘guess’, ‘probably’, ‘certain’ (recoded as 1, 2, 3).
mcm <- mymelt(dt = allok, formula = conf.n.neg.1 + conf.n.neg.2 + conf.n.neg.3 + conf.n.pos.1 + conf.n.pos.2 + conf.n.pos.3 + conf.o.neg.1 + conf.o.neg.2 + conf.o.neg.3 + conf.o.pos.1 + conf.o.pos.2 + conf.o.pos.3 + conf.y.neg.1 + conf.y.neg.2 + conf.y.neg.3 + conf.y.pos.1 + conf.y.pos.2 + conf.y.pos.3 ~ session_id + condition + positive)
mcm$who <- substr(mcm$variable, 6, 6)
mcm$vlnc <- substr(mcm$variable, 8, 8)
mcm$conf <- substr(mcm$variable, 12, 12)
mcm <- mcm[complete.cases(mcm), ]
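The who/valence/confidence codes above are pulled out by fixed character positions, which works because every variable follows the same `conf.<who>.<vlnc>.<answer>` naming pattern. A position-robust alternative (a sketch, assuming only that naming scheme) splits on the dots instead:

```r
# Hypothetical alternative to fixed substr() positions: split the encoded
# variable name on "." and pick fields by index.
parts <- strsplit("conf.y.neg.2", ".", fixed = TRUE)[[1]]
who  <- parts[2]                # "y"
vlnc <- substr(parts[3], 1, 1)  # "n" (first letter of "neg"/"pos")
conf <- parts[4]                # "2"
```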
Memory accuracy by confidence:
mysumBy(value ~ conf, dt = mcm)
As can be expected, more accuracy when people are more confident about their response.
Soon…
Planned tests: examine each attribute question; examine negative and positive attribute ratings separately; examine the effect of stimuli (specific names, identities, ages); and check whether the results are more promising when excluding participants who did not report perfect English.
We saved the conditions of the study within different tasks. At other times, it is possible to infer the condition from the data. For example, if photo data were saved in the IAT, then we can infer that the participant saw a photo. There could be some inconsistencies due to technical problems. I think the main cause is refreshing the page: it is supposed to trigger an error and kill the study, but clicking Back in the browser may sometimes restart the study session, possibly with different conditions. In this section, we examine whether the conditions saved in the sample are consistent.
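The checks that follow all share one pattern: cross-tabulate the two records of the same condition and verify that all observations fall in matching cells. A small helper (hypothetical, not part of the original scripts) captures that pattern for checks where both columns use the same labels:

```r
# Check that two condition columns agree: every observation should fall in a
# cell where the row and column labels match, i.e., all off-"diagonal"
# (mismatched-label) cells of the cross-table are zero.
conds_consistent <- function(a, b) {
  tab <- table(a, b, exclude = NULL)
  ok  <- outer(rownames(tab), colnames(tab), `==`)  # matching-label cells
  sum(tab[!ok]) == 0                                # no mass elsewhere
}

conds_consistent(c("x", "y", "y"), c("x", "y", "y"))  # TRUE
conds_consistent(c("x", "y", "y"), c("x", "x", "y"))  # FALSE
```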
In the induction task, we saved the condition by which we determined what to show to participants. Let’s make sure it was the same condition as the one recorded at the beginning of the task.
table(indPart1Cond = allok$indPart1Cond, part1Cond = allok$part1Cond, exclude = NULL)
part1Cond
indPart1Cond name-only name+age name+age+photo
name-only 349 0 0
name+age 0 380 0
name+age+photo 0 0 196
Perfect.
From the induction, we can infer whether the old or the young target was the positively or the negatively portrayed target, based on the number of positive (and negative) behaviors presented in the induction task.
table(ind.oGood = allok$ind.oGood, oldVlnc = allok$oldVlnc, exclude = NULL)
oldVlnc
ind.oGood b g
FALSE 459 0
TRUE 0 466
Good. And for the young:
table(ind.yGood = allok$ind.yGood, oldVlnc = allok$oldVlnc, exclude = NULL)
oldVlnc
ind.yGood b g
FALSE 0 466
TRUE 459 0
Good.
I also counted the number of positive and negative behaviors presented in the induction, based on the data recorded in that task. Old target:
table(old.bvs = allok$o.bvs, young.bvs = allok$y.bvs, exclude = NULL)
young.bvs
old.bvs 4-8 8-4
4-8 0 459
8-4 466 0
All good.
In the matching task, we saved the names and face identities of the targets, based on what was shown. They were also saved at the beginning of the task, after those stimuli were assigned. Let’s test for consistency:
table(match.oName = allok$match.oName.y, oName = allok$oName, exclude = NULL)
oName
match.oName Andy Chris Daniel Jack James John Matt Michael Nick
Andy 118 0 0 0 0 0 0 0 0
Chris 0 105 0 0 0 0 0 0 0
Daniel 0 0 105 0 0 0 0 0 0
Jack 0 0 0 96 0 0 0 0 0
James 0 0 0 0 97 0 0 0 0
John 0 0 0 0 0 88 0 0 0
Matt 0 0 0 0 0 0 125 0 0
Michael 0 0 0 0 0 0 0 92 0
Nick 0 0 0 0 0 0 0 0 99
All seems fine.
table(match.yName = allok$match.yName.y, oName = allok$yName, exclude = NULL)
oName
match.yName Andy Chris Daniel Jack James John Matt Michael Nick
Andy 105 0 0 0 0 0 0 0 0
Chris 0 101 0 0 0 0 0 0 0
Daniel 0 0 104 0 0 0 0 0 0
Jack 0 0 0 112 0 0 0 0 0
James 0 0 0 0 102 0 0 0 0
John 0 0 0 0 0 99 0 0 0
Matt 0 0 0 0 0 0 103 0 0
Michael 0 0 0 0 0 0 0 95 0
Nick 0 0 0 0 0 0 0 0 104
All seems fine.
From the IAT, we can learn whether photos or names were presented.
table(IAT.stims = allok$iat.vCond, stimCond = allok$part3Cond, exclude = NULL)
stimCond
IAT.stims name-only name+photo
photo 0 351
word 574 0
All ok.
From the attribute questionnaire, we saved the photo identity chosen (also saved in the matching task):
table(att.old.id = allok$att.man.id.o, match.old.id = allok$match.oFace.y, exclude = NULL)
match.old.id
att.old.id old_a old_b old_c old_d old_e old_f
old_a 140 0 0 0 0 0
old_b 0 162 0 0 0 0
old_c 0 0 133 0 0 0
old_d 0 0 0 154 0 0
old_e 0 0 0 0 170 0
old_f 0 0 0 0 0 166
table(att.yng.id = allok$att.man.id.y, match.yng.id = allok$match.yFace.y, exclude = NULL)
match.yng.id
att.yng.id young_a young_b young_c young_d young_e young_f
young_a 162 0 0 0 0 0
young_b 0 151 0 0 0 0
young_c 0 0 173 0 0 0
young_d 0 0 0 150 0 0
young_e 0 0 0 0 152 0
young_f 0 0 0 0 0 137
And the names:
table(att.old.name = allok$att.man.name.o, old.name = allok$oName, exclude = NULL)
old.name
att.old.name Andy Chris Daniel Jack James John Matt Michael Nick
Andy 118 0 0 0 0 0 0 0 0
Chris 0 105 0 0 0 0 0 0 0
Daniel 0 0 105 0 0 0 0 0 0
Jack 0 0 0 96 0 0 0 0 0
James 0 0 0 0 97 0 0 0 0
John 0 0 0 0 0 88 0 0 0
Matt 0 0 0 0 0 0 125 0 0
Michael 0 0 0 0 0 0 0 92 0
Nick 0 0 0 0 0 0 0 0 99
table(att.yng.name = allok$att.man.name.y, yng.name = allok$yName, exclude = NULL)
yng.name
att.yng.name Andy Chris Daniel Jack James John Matt Michael Nick
Andy 105 0 0 0 0 0 0 0 0
Chris 0 101 0 0 0 0 0 0 0
Daniel 0 0 104 0 0 0 0 0 0
Jack 0 0 0 112 0 0 0 0 0
James 0 0 0 0 102 0 0 0 0
John 0 0 0 0 0 99 0 0 0
Matt 0 0 0 0 0 0 103 0 0
Michael 0 0 0 0 0 0 0 95 0
Nick 0 0 0 0 0 0 0 0 104
Let’s also make sure all the name combinations occurred, but never using the same name for both targets:
table(old.name = allok$oName, yng.name = allok$yName, exclude = NULL)
yng.name
old.name Andy Chris Daniel Jack James John Matt Michael Nick
Andy 0 18 18 11 15 13 13 16 14
Chris 14 0 12 14 14 13 12 13 13
Daniel 12 10 0 18 12 10 17 8 18
Jack 14 8 4 0 15 16 15 11 13
James 13 15 15 11 0 13 13 9 8
John 8 7 17 8 8 0 9 13 18
Matt 14 21 18 18 14 17 0 11 12
Michael 13 14 9 14 13 8 13 0 8
Nick 17 8 11 18 11 9 11 14 0
ok.
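The constraint in the table above is that its diagonal is all zeros (the two targets never share a name). That can be spot-checked programmatically; a minimal sketch (hypothetical helper, not part of the original scripts):

```r
# TRUE when no observation uses the same name for both targets, i.e., every
# matching-label cell of the old-name x young-name cross-table is zero.
no_shared_name <- function(old, yng) {
  tab    <- table(old, yng)
  shared <- intersect(rownames(tab), colnames(tab))
  all(tab[cbind(shared, shared)] == 0)
}

no_shared_name(c("Andy", "Chris"), c("Chris", "Andy"))  # TRUE
no_shared_name(c("Andy", "Chris"), c("Andy", "John"))   # FALSE
```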
In the attributes questionnaire, we also saved the evaluation condition (with or without photos)
table(part3Cond = allok$part3Cond, attCond = allok$attPart3Cond, exclude = NULL)
attCond
part3Cond name-only name+photo
name-only 574 0
name+photo 0 351
OK.
One odd thing in the correlation matrix above was that the correlation between the IAT and the self-reported preference changed between correlating the preference for the positively-portrayed target and correlating the preference for the young target. One was a recoding of the other (depending on the induction condition), so it seemed reasonable to expect the same correlation. To verify that no mistake was made, let’s see those correlations within each induction condition.
First, when the old target was the positively-portrayed target.
my.htmlTable(cornp(allok[which(allok$oldVlnc == "g"), c("IAT.gb", "IAT.prf", "eval.diff", "eval.gb")]))
| varName____ | IAT.gb____ | IAT.prf____ | eval.diff____ | |
|---|---|---|---|---|
| 1 | IAT.prf |
-1 < .001 466 |
||
| 2 | eval.diff |
-0.213 < .001 466 |
0.213 < .001 466 |
|
| 3 | eval.gb |
0.213 < .001 466 |
-0.213 < .001 466 |
-1 < .001 466 |
Second, when the old target was the negatively-portrayed target:
my.htmlTable(cornp(allok[which(allok$oldVlnc == "b"), c("IAT.gb", "IAT.prf", "eval.diff", "eval.gb")]))
| varName____ | IAT.gb____ | IAT.prf____ | eval.diff____ | |
|---|---|---|---|---|
| 1 | IAT.prf |
1 < .001 459 |
||
| 2 | eval.diff |
0.113 0.016 459 |
0.113 0.016 459 |
|
| 3 | eval.gb |
0.113 0.016 459 |
0.113 0.016 459 |
1 < .001 459 |
The important correlations were within the IAT scores and within the eval (self-report) scores. The -1 and 1 correlations show that the recoding was as expected. I am not entirely sure what the reason for the difference in correlations is when looking at the whole sample:
my.htmlTable(cornp(allok[, c("IAT.gb", "IAT.prf", "eval.diff", "eval.gb")]))
| varName____ | IAT.gb____ | IAT.prf____ | eval.diff____ | |
|---|---|---|---|---|
| 1 | IAT.prf |
-0.021 0.52 925 |
||
| 2 | eval.diff |
-0.008 0.816 925 |
0.237 < .001 925 |
|
| 3 | eval.gb |
0.171 < .001 925 |
-0.039 0.237 925 |
0.001 0.974 925 |
0.237 vs. 0.171. Perhaps this is related to SD. Let’s verify that it doesn’t happen with a Spearman correlation (i.e., when only the ranking matters):
my.htmlTable(cornp(allok[, c("IAT.gb", "IAT.prf", "eval.diff", "eval.gb")], type = "spearman"))
| varName____ | IAT.gb____ | IAT.prf____ | eval.diff____ | |
|---|---|---|---|---|
| 1 | IAT.prf |
0.002 0.962 925 |
||
| 2 | eval.diff |
0.025 0.45 925 |
0.233 < .001 925 |
|
| 3 | eval.gb |
0.164 < .001 925 |
-0.027 0.411 925 |
0.014 0.66 925 |
No. So it’s unrelated to SD.
Let’s simulate data and implement the same recoding, to make sure this can happen:
df <- data.frame(sid = c(1:1000), one = rnorm(n = 1000, mean = 0.33, sd = 0.397), cond = rep(c(2, 1), 500))
df$two <- rnorm(n = nrow(df), mean = 3.779, sd = 53.916) + (df$one * 25)
df$one2 = ifelse(df$cond == 1, df$one, -1 * df$one)
df$two2 = ifelse(df$cond == 1, df$two, -1 * df$two)
my.htmlTable(cornp(df))
| varName____ | sid____ | one____ | cond____ | two____ | one2____ | |
|---|---|---|---|---|---|---|
| 1 | one |
-0.006 0.845 1000 |
||||
| 2 | cond |
-0.002 0.956 1000 |
-0.029 0.356 1000 |
|||
| 3 | two |
-0.039 0.214 1000 |
0.179 < .001 1000 |
0.026 0.414 1000 |
||
| 4 | one2 |
0.021 0.513 1000 |
0.031 0.334 1000 |
-0.634 < .001 1000 |
-0.029 0.364 1000 |
|
| 5 | two2 |
-0.042 0.181 1000 |
-0.011 0.739 1000 |
-0.177 < .001 1000 |
0.015 0.638 1000 |
0.249 < .001 1000 |
The comparison of interest is between the correlation of one with two and the correlation of one2 with two2. We can see that these are not the same correlations.
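The reason the pooled correlation changes: flipping the sign of both variables in half the sample leaves each within-condition correlation untouched (cor(-x, -y) = cor(x, y)), but whenever the original means are nonzero the flip pushes the two condition means apart, and those between-condition mean differences enter the pooled covariance. A compact sketch of that point (simulated data, not the study’s variables):

```r
set.seed(42)
n    <- 1000
cond <- rep(c(1, 2), n / 2)
one  <- rnorm(n, mean = 0.33)        # a nonzero mean is the key ingredient
two  <- 0.5 * one + rnorm(n)
flip <- ifelse(cond == 1, 1, -1)     # condition-based sign recoding
one2 <- flip * one
two2 <- flip * two

# Flipping the sign of BOTH variables leaves the within-condition
# correlation exactly unchanged:
cor(one[cond == 2], two[cond == 2]) == cor(one2[cond == 2], two2[cond == 2])  # TRUE
# ...but the flip moves the two condition means apart, so the pooled
# correlations generally differ:
c(pooled_raw = cor(one, two), pooled_flipped = cor(one2, two2))
```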
The age bias in the IAT was quite small, in comparison to previous studies.
In none of the five conditions was the age bias strong in both IAT block-order conditions. This is further evidence that we did not obtain a strong and reliable age bias with the IAT.
There was some surprising age bias in the self-reported evaluation, in the [name-only,name-only] (d = 0.33) and [name+age+photo,name-only] (d = 0.49) conditions. Oddly, these conditions did not show age bias in the IAT. It is difficult to understand what happened here, but it might be worth replicating.
People had the best memory for behaviors attributed to the young target, followed by the old target, and then novel behaviors. That difference did not occur in the [name-only, name-only] baseline condition. Memory was more accurate the more individuating information appeared in the induction task. We did not find evidence that individuating information in the memory test (retrieval) helped, but we also did not have conditions that provided those cues at both encoding and retrieval (and it is not clear there is a reason to expect that cues at retrieval that did not appear at encoding would improve memory).